perm filename SEARLE[F87,JMC] blob sn#850880 filedate 1987-12-28 generic text, type T, neo UTF8
May 26

Themes from John Searle

1. Time-sharing and interpretation

2. Political movements and other memes as programs in the societal computer.

3. Artificial speech acts

4. The strong AI hypothesis and ascribing mental qualities to machines

5. Contingent identifications as an example of ambiguity tolerance.

Time-sharing, interpretation and variants of the ``Chinese room.''
Searle's ``Chinese room'' hypothesizes a man in a room equipped with a
book of rules for manipulating Chinese sentences and conducting a Chinese
conversation by using the rules to reply in Chinese to Chinese sentences
passed into the room on paper.  Searle asks whether we would then say that the
man necessarily understands Chinese.  Noting that the man might not
understand Chinese in any usual sense, Searle concludes that a computer
obeying rules doesn't know anything either, e.g. doesn't understand
Chinese.

Searle rejects the proposal that it's the system consisting of the man and
the rule book that understands Chinese, pointing out that the man might
have the rules in his memory, still without understanding Chinese in any
ordinary sense.

Even in some weakened sense of the AI thesis, one still wants to say that
specific programs can do X rather than that the machine can do X.

An Abstract Performative

	John Searle has extensively discussed performative speech acts
as human actions.  This note introduces the {\it abstract performative}
of {\it making a commitment} that is applicable to computer programs in
programming languages with {\it commitment statements}.  For programs
with commitment statements, there is an intrinsic notion of correctness.
Namely, a program is {\it intrinsically correct} if it always fulfills
its commitments.

	The example I have in mind is to program an airline reservation system
using a language with commitment statements.  When the program accepts
a reservation it commits itself to let the passenger occupy a specific seat
 on the airplane if the reservation isn't subsequently cancelled.  We
suppose that letting the passenger on the airplane is represented by
putting the seat assignment on the display used by the agent at the
departure gate.
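	No programming language with commitment statements exists yet; the
following is a minimal sketch, in present-day Python, of how the semantics
of the reservation example might be modelled.  The names Commitment,
ReservationSystem and intrinsically_correct_so_far are invented for
illustration.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Commitment:
    description: str
    fulfilled: bool = False
    cancelled: bool = False

class ReservationSystem:
    def __init__(self):
        self.commitments = []

    def accept_reservation(self, passenger, seat):
        # The "commitment statement": accepting a reservation creates an
        # obligation to seat the passenger unless the reservation is
        # later cancelled.
        c = Commitment(f"seat {passenger} in {seat}")
        self.commitments.append(c)
        return c

    def cancel(self, commitment):
        commitment.cancelled = True

    def board(self, commitment):
        # Fulfilling the commitment: put the seat assignment on the
        # display used by the agent at the departure gate.
        if not commitment.cancelled:
            print("Gate display:", commitment.description)
            commitment.fulfilled = True

    def intrinsically_correct_so_far(self):
        # The program is intrinsically correct if every commitment has
        # been fulfilled or cancelled.
        return all(c.fulfilled or c.cancelled for c in self.commitments)

system = ReservationSystem()
c = system.accept_reservation("Jones", "17C")
system.board(c)
print(system.intrinsically_correct_so_far())   # prints True
\end{verbatim}

	The point of the sketch is only that correctness is stated in terms
of commitments; how the reservations are actually remembered is left to
the implementation, as the compiler discussion below explains.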

	The advantage of a programming language with commitment
statements is that nothing need be stated in the source program, i.e.
the program that the programmer writes, about how the system is to
remember the reservation.  All that is up to the compiler program that
translates the source program into a lower level language, e.g. the
machine language of the computer being used.  Programs using commitment
statements will be easier to write and debug.  On the other hand, the
{\it compiler}, the program that translates such a source program into
machine language, will be more difficult to write.

	Moreover, programs with commitment statements may be modifiable
with less knowledge of the existing program.  This is important, because
most of the work that most programmers do is modifying existing programs
rather than writing entirely new ones.  This is slow work, because
modifying a program requires understanding to some extent what the
previous programmers have done.  This can be a very difficult and
error-prone task if the program consists of hundreds of thousands of
lines of code written by tens of long-departed programmers over many
years.

	Correlated with the idea that programs using commitment statements
are more readily understandable and modifiable is the idea that human
behavior, including one's own behavior, is more readily understood,
taught and otherwise modified if we use suitable notions of commitment.

	Perhaps the notion of acts of abstract commitment can be used to
clarify some of the philosophical problems connected with promises and
other speech acts.  We are tempted to consider the notion of commitment
as not dependent on speech or other communication and not dependent on
whether the system committing itself is human or machine.  All that is
essential about the notion is that {\it correct} behavior includes
fulfilling commitments.


Distinguishing personalities from the bodies they inhabit

	Under ordinary common sense circumstances,  such distinctions
are unnecessary.  The sentence ``John was thinking about his troubles
and was hit by a car'' is considered ok, unlike ``John wrote the letter
in haste and red ink''.  The John that thought and the John that was
hit are regarded as the same.

	However, both in literature and in philosophy the distinction
is often made.  In literature we have R. L. Stevenson's ``Dr. Jekyll
and Mr. Hyde'' in which two personalities alternate controlling a
body.  In philosophy and religion, the  notion of soul makes the
distinction, though mostly there is just one soul associated with
a given body.

	In computer science and technology, there are often many
programs associated with a given piece of hardware.  It is instructive
to distinguish several cases.

	1. Parallel hardware.  Several processors share parts of
the system including power and communication.  They may be running
distinct programs or working on different parts of the same program.

	2. Time-sharing.  Here a processor switches its attention
rapidly among all the processes that users want run.  The object
is to keep the switching rapid enough so that each user can behave
as if he has a computer of his own.

	3. Interpretation.  A program called an interpreter, written in
one programming language, takes a program written in another language
and executes its instructions one by one, using subprograms that
tell it how to execute each kind of instruction.  For example, LISP
and Prolog interpreters running on conventional computers execute
LISP or Prolog programs.
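	The following toy interpreter, written in Python for illustration,
shows the sense in which the interpreted program is data to the
interpreting program; the instruction set is invented and is not LISP or
Prolog.

\begin{verbatim}
def interpret(program, env):
    """Execute a list of (operation, args) instructions one by one,
    using a small subprogram for each kind of instruction."""
    for op, *args in program:
        if op == "set":          # set a variable to a constant
            name, value = args
            env[name] = value
        elif op == "add":        # add two variables into a third
            target, a, b = args
            env[target] = env[a] + env[b]
        elif op == "print":      # print a variable
            print(env[args[0]])
        else:
            raise ValueError(f"unknown instruction: {op}")
    return env

# The interpreted program is data to the interpreter, just as the rule
# book is data to the man in the Chinese room.
program = [
    ("set", "x", 2),
    ("set", "y", 3),
    ("add", "z", "x", "y"),
    ("print", "z"),              # prints 5
]

interpret(program, {})
\end{verbatim}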

	The distinction between the hardware and the processes running
on the hardware is analogous to the mind-body distinction in humans.
However, the computer science distinctions are easier to understand,
because we can find or build systems that emphasize almost any distinctions
we want to make.

	From this we can conclude that no single kind of mind-body
distinction is to be preferred on purely logical grounds.  For many
purposes, no distinction at all is needed.  Understanding some systems
that could be built may require distinctions that no-one has so far
thought to make.

	John Searle's Chinese room example may be thought of in these
terms.  Let us take it in the pure form that the man has memorized the
instructions for processing Chinese characters that permit an arbitrary
intelligent conversation in Chinese.  Therefore, he doesn't need a
physical book of rules.  The man is thus acting as an interpreter for
the set of rules.  Whatever Chinese personality may be considered
necessary for the intelligent conversation is embodied in the set of
rules and the mental process that is executing them.  In my opinion
writing such a set of rules would require simulating most of human
personality and would therefore require understanding it --- a matter
far beyond the present state of psychology.  The interesting distinction
that has to be made is between the ``primary'' personality of the man
and the Chinese personality that his interpretation of the rules gives
rise to.  If it were easier than it is to overcome the practical
difficulties of understanding personality well enough to write such
rules, memorizing the large body of information that would be required,
and executing the memorized rules at an interesting speed, then perhaps
the phenomenon would occur often enough so that our language would have
words that make the necessary distinctions.


Ascribing (Weak) Mental Qualities to Machines

	Daniel Dennett (1971) and McCarthy (1979) advocate ascribing
beliefs, desires and other mental qualities to machines when this
helps understand their behavior in connection with the other information
we have about them.  Presumably supposing that this would help his
customers use it successfully, the manufacturer of an electric blanket
included the following in his instruction sheet.

``''

	Because of the vigor of the objections from some quarters,
especially from John Searle, I'll refer to weak mental qualities in
this note.  Weak mental qualities are properties of the internal
state of a machine or organism that correlate with behavior in
certain of the ways that the ``genuine'' mental qualities of a
human correlate with his behavior.  We don't require that weak
mental qualities be part of any complete set.  Thus a system could have
a few weak beliefs and goals and not much else.

	Of course, it is interesting to know what constitutes
the full set of mental qualities possessed by humans, but I suspect
that they develop with age in the individual and with culture in
society.  For example, a baby learns to say no at about the age of
two, even though before that he doesn't always accept suggestions.
Being able to say no seems to require reifying the proposal so it
can be thought of as an object to be accepted or rejected.  Before
that, the proposal may figure merely as one of several competing courses of action.

	An electric blanket with a simple thermostat is a convenient
initial example, because it is easy to understand both in physical
and weak mentalistic terms, and we can see the relation between
the two.  It has been argued that all mentalistic terms should be
avoided in considering systems that don't actually require their use
in order to express our knowledge of their behavior.  However, it
seems to me that this would be like omitting 0 and 1 from the
number system on the grounds that we don't need them to understand
the null set and sets with one element.  Our number system
includes 0 and 1, because their inclusion clarifies the system.
Likewise our understanding of the relation between the physical
and weak mentalistic properties of humans and machines is
clarified by including the simple cases.  Excluding them would
require arbitrary and ugly rules like those that would be required
to exclude 0 and 1.

	In ascribing weak beliefs and goals to the thermostat,
we have some choices to make.  The most parsimonious system,
not necessarily the best for all purposes, ascribes one of
three possible beliefs \{too hot, too cold, ok\}.  I will now
argue that the example of the simple thermostat is continuous with
much more complex temperature control systems whose understanding by
their users or even designers will require intentional concepts.
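	Here is a minimal sketch of the parsimonious ascription just
described, written in Python; the threshold values and action names are
made up for illustration.

\begin{verbatim}
def thermostat_belief(temperature, setting, tolerance=1.0):
    """Return the weak belief ascribed to a simple thermostat."""
    if temperature > setting + tolerance:
        return "too hot"
    if temperature < setting - tolerance:
        return "too cold"
    return "ok"

def action(belief):
    # The belief correlates with behavior in the way a human belief does:
    # believing "too cold" leads to turning the heat on.
    return {"too hot": "turn heater off",
            "too cold": "turn heater on",
            "ok": "leave heater as is"}[belief]

for t in (15.0, 20.0, 25.0):
    b = thermostat_belief(t, setting=20.0)
    print(t, b, action(b))
\end{verbatim}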

	Imagine that the system must control the temperature of
the whole house room by room.  It takes into account the preferences
of the different occupants of the house so that the temperature
of a particular room is to depend on who is in it at the moment.
It also anticipates predicted changes in the weather, suitably adjusting
blinds, etc.  I contend that its users, especially children, will
want to think of it intentionally.  For example, one might say to
a child, ``If you carry grandpa's pipe into your room it will think
he came with you and will make the room as warm as he likes it.''
A malfunction may be described as a mistaken belief.  The designer
will anticipate possible mistakes and provide for checking or
reference to the people in the house.
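	As a sketch of how a malfunction can be described as a mistaken
belief, the following Python fragment (the occupants, preferences and pipe
sensor are all invented for illustration) infers who is in a room from
what is sensed there; carrying grandpa's pipe into a room produces the
mistaken belief that grandpa is present.

\begin{verbatim}
preferences = {"grandpa": 24.0, "child": 20.0}   # preferred temperatures

def infer_occupant(sensed_objects):
    """The system's belief about who is in a room, based on what it
    senses.  Sensing the pipe produces the (possibly mistaken) belief
    that grandpa is present."""
    if "pipe" in sensed_objects:
        return "grandpa"
    return "child"

def target_temperature(room_sensed_objects):
    believed_occupant = infer_occupant(room_sensed_objects)
    return preferences[believed_occupant]

# The child carries the pipe into the room: the system now "thinks"
# grandpa came along and warms the room accordingly.
print(target_temperature({"pipe", "toy"}))   # 24.0 -- a mistaken belief
print(target_temperature({"toy"}))           # 20.0
\end{verbatim}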

	The distinction between weak mental qualities and genuine
mental qualities isn't clear to me.